89 research outputs found

    A decade of clinical negligence in ophthalmology

    Background: To present an overview of the clinical negligence claims for ophthalmology in the National Health Service (NHS) in England from 1995 to 2006, and to compare ophthalmic subspecialties with respect to claim numbers and payments. Methods: All the claims on the NHS Litigation Authority database for ophthalmology for the period 1995 to 2006 were analysed. Claims were categorised by ophthalmic subspecialty, and subspecialties were ranked according to numbers of claims, total damages paid, average level of damages and paid:closed ratio (a measure of the likelihood of a claim resulting in payment of damages). Results: There were 848 claims, 651 of which were closed. 46% of closed claims resulted in payment of damages. The total cost of damages over the period was £11 million. The mean level of damages was £37,100. Cataract made up the largest share of claims (31%), paediatric ophthalmology had the highest mean damages (£170,000), and claims related to glaucoma were most likely to result in payment of damages (64%). Conclusion: Clinical negligence claims in ophthalmology in England are infrequent, but most ophthalmologists will face at least one in their career. Ophthalmic subspecialties show marked differences with regard to their litigation profiles. From a medical protection perspective, these results suggest that indemnity premiums should be tailored according to the subspecialty areas an ophthalmologist is involved in.
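
    The headline figures are internally consistent; a minimal Python sketch of the arithmetic, assuming the quoted mean of £37,100 is taken over the closed claims that resulted in payment of damages:

    # Rough consistency check on the reported figures (assumption: the mean is
    # computed over closed claims that led to payment of damages).
    closed_claims = 651
    paid_fraction = 0.46              # 46% of closed claims led to payment
    total_damages = 11_000_000        # roughly £11 million over 1995-2006

    paid_claims = round(closed_claims * paid_fraction)   # ~299 claims
    mean_damages = total_damages / paid_claims           # ~£36,800

    print(f"Estimated paid claims: {paid_claims}")
    print(f"Implied mean damages: £{mean_damages:,.0f} (abstract reports £37,100)")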

    Characterization and Comparison of the Leukocyte Transcriptomes of Three Cattle Breeds

    In this study, mRNA-Seq was used to characterize and compare the leukocyte transcriptomes of two taurine breeds (Holstein and Jersey) and one indicine breed (Cholistani). At the genomic level, we identified breed-specific base changes in protein-coding regions. Among 7,793,425 coding bases, only 165 differed between Holstein and Jersey, whereas 3,383 (0.04%) differed between Holstein and Cholistani, 817 (25%) of which resulted in amino acid changes in 627 genes. At the transcriptional level, we assembled transcripts and estimated their abundances, including those from more than 3,000 unannotated intergenic regions. Differential gene expression analysis showed a high similarity between Holstein and Jersey, and a much greater difference between the taurine breeds and the indicine breed. We identified gene ontology pathways that were systematically altered, including the electron transport chain and immune response pathways that may contribute to different levels of heat tolerance and disease resistance in taurine and indicine breeds. At the post-transcriptional level, sequencing mRNA allowed us to identify a number of genes undergoing differential alternative splicing among the breeds. This study provided a high-resolution survey of the variation between bovine transcriptomes at different levels and may offer important biological insights into the phenotypic differentiation among cattle breeds.
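
    As a rough illustration of the comparison step only, the sketch below runs a per-gene differential expression test between two breeds on a hypothetical read-count matrix; it does not reproduce the study's assembly and abundance-estimation pipeline, and the gene names, counts and replicate numbers are invented.

    # Sketch: per-gene differential expression between two breeds on a
    # hypothetical count matrix (3 replicates per breed, 5 genes).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    genes = ["gene_%d" % i for i in range(5)]
    holstein = rng.poisson(lam=[50, 200, 10, 500, 80], size=(3, 5))
    cholistani = rng.poisson(lam=[55, 120, 12, 900, 75], size=(3, 5))

    def log_cpm(counts):
        """Counts per million on a log2 scale, with a pseudocount of 1."""
        cpm = counts / counts.sum(axis=1, keepdims=True) * 1e6
        return np.log2(cpm + 1)

    h, c = log_cpm(holstein), log_cpm(cholistani)
    for j, gene in enumerate(genes):
        t_stat, p_value = stats.ttest_ind(h[:, j], c[:, j])
        lfc = h[:, j].mean() - c[:, j].mean()
        print(f"{gene}: log2 fold change = {lfc:+.2f}, p = {p_value:.3f}")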

    Bullying and Victimization Among Adolescents: The Role of Ethnicity and Ethnic Composition of School Class

    The present study examined the relationships between ethnicity, peer-reported bullying and victimization, and whether these relationships were moderated by the ethnic composition of the school classes. Participants were 2386 adolescents (mean age: 13 years and 10 months; 51.9% boys) from 117 school classes in the Netherlands. Multilevel analyses showed that, after controlling for the ethnic composition of the school class, ethnic minority adolescents were less victimized, but did not differ from ethnic majority group members in bullying. Victimization was more prevalent in ethnically heterogeneous classes. Furthermore, the results revealed that ethnic minority adolescents bully more in ethnically heterogeneous classes. Our findings suggest that, in order to understand bullying and victimization in schools in ethnically diverse societies, the ethnic background of adolescents and the ethnic composition of school classes should be taken into account.
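
    For readers unfamiliar with multilevel analysis, the sketch below fits a linear mixed model with a random intercept for school class to simulated data. The variable names echo the abstract, but the data, effect sizes and the simplification to a continuous victimization score are assumptions, not the study's actual models.

    # Sketch: pupils nested in school classes, random intercept per class.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_classes, pupils_per_class = 40, 25
    class_id = np.repeat(np.arange(n_classes), pupils_per_class)
    heterogeneity = np.repeat(rng.uniform(0, 0.7, n_classes), pupils_per_class)
    minority = rng.binomial(1, 0.3, n_classes * pupils_per_class)
    victimization = (0.5 + 0.8 * heterogeneity - 0.3 * minority
                     + rng.normal(0, 1, n_classes * pupils_per_class))

    df = pd.DataFrame({"class_id": class_id, "heterogeneity": heterogeneity,
                       "minority": minority, "victimization": victimization})

    # Fixed effects: pupil ethnicity and class-level ethnic heterogeneity;
    # random intercept: school class.
    model = smf.mixedlm("victimization ~ minority + heterogeneity", df,
                        groups=df["class_id"])
    print(model.fit().summary())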

    Standardized and reproducible methodology for the comprehensive and systematic assessment of surgical resection margins during breast-conserving surgery for invasive breast cancer

    Background: The primary goal of breast-conserving surgery (BCS) is to completely excise the tumor and achieve "adequate" or "negative" surgical resection margins while maintaining an acceptable level of postoperative cosmetic outcome. Nevertheless, precise determination of the adequacy of BCS has long been debated. In this regard, the aim of the current paper was to describe a standardized and reproducible methodology for comprehensive and systematic assessment of surgical resection margins during BCS. Methods: Retrospective analysis of 204 BCS procedures performed for invasive breast cancer from August 2003 to June 2007, in which patients underwent a standard BCS resection and systematic sampling of nine standardized re-resection margins (superior, superior-medial, superior-lateral, medial, lateral, inferior, inferior-medial, inferior-lateral, and deep-posterior). Multiple variables (including patient, tumor, specimen, and follow-up variables) were evaluated. Results: 6.4% (13/204) of patients had positive BCS specimen margins (defined as tumor at the inked edge of the BCS specimen) and 4.4% (9/204) of patients had close margins (defined as tumor within 1 mm or less of the inked edge but not at the inked edge of the BCS specimen). 11.8% (24/204) of patients had at least one re-resection margin containing additional disease, independent of the status of the BCS specimen margins. 7.1% (13/182) of patients with negative BCS specimen margins (defined as no tumor cells seen within 1 mm or less of the inked edge of the BCS specimen) had at least one re-resection margin containing additional disease. Thus, 54.2% (13/24) of patients with additional disease in a re-resection margin would not have been recognized by a standard BCS procedure alone (P < 0.001). The nine standardized resection margins represented only 26.8% of the volume of the BCS specimen and 32.6% of its surface area. Conclusion: Our methodology accurately assesses the adequacy of surgical resection margins to determine which individuals may need further resection of the affected breast in order to minimize the potential risk of local recurrence while limiting the volume of additional breast tissue excised, and which individuals are not realistically amenable to BCS and instead need a completion mastectomy to successfully remove multifocal disease.

    A systematic review of the role of vitamin insufficiencies and supplementation in COPD

    Background: Pulmonary inflammation, an oxidant-antioxidant imbalance, and innate and adaptive immunity have all been proposed as playing a key role in the development of COPD. Vitamins, whether assessed by food frequency questionnaires or measured as serum levels, have been reported to improve pulmonary function, reduce exacerbations and improve symptoms. Vitamin supplements have therefore been proposed as a potentially useful addition to COPD therapy. Methods: A systematic literature review was performed on the association between vitamins and COPD. The role of vitamin supplements in COPD was then evaluated. Conclusions: This review showed that various vitamins (vitamins C, D, E and A, and beta- and alpha-carotene) are associated with improvement in features of COPD such as symptoms, exacerbations and pulmonary function. High vitamin intake would probably reduce the annual decline of FEV1. No studies showed a benefit of vitamin supplementation on symptoms, hospitalization or pulmonary function.

    Obtaining Adequate Surgical Margins in Breast-Conserving Therapy for Patients with Early-Stage Breast Cancer: Current Modalities and Future Directions

    Inadequate surgical margins represent a high risk for adverse clinical outcome in breast-conserving therapy (BCT) for early-stage breast cancer. The majority of studies report positive resection margins in 20% to 40% of the patients who underwent BCT. This may result in an increased local recurrence (LR) rate or additional surgery and, consequently, adverse effects on cosmesis, psychological distress, and health costs. In the literature, various risk factors are reported to be associated with positive margin status after lumpectomy, which may allow the surgeon to distinguish those patients with a higher a priori risk for re-excision. However, most risk factors are related to tumor biology and patient characteristics, which cannot be modified as such. Therefore, efforts to reduce the number of positive margins should focus on optimizing the surgical procedure itself, because the surgeon lacks real-time intraoperative information on the presence of positive resection margins during breast-conserving surgery. This review presents the status of pre- and intraoperative modalities currently used in BCT. Furthermore, innovative intraoperative approaches, such as positron emission tomography, radioguided occult lesion localization, and near-infrared fluorescence optical imaging, are addressed, which have yet to prove their value in improving surgical outcome and reducing the need for re-excision in BCT.

    Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.

    Background: Acute illness, existing co-morbidities and surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was prospectively to develop a pragmatic prognostic model to stratify patients according to risk of developing AKI after major gastrointestinal surgery. Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors into the model. Internal model validation was carried out by bootstrap validation. Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death with each AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65). Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
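
    As a sketch of the general modelling approach described (not the study's data, coefficients or software), the following fits a logistic regression on simulated preoperative variables and estimates optimism-corrected discrimination by bootstrap validation; all values are illustrative assumptions.

    # Sketch: logistic regression risk model with bootstrap (optimism-corrected)
    # validation on simulated preoperative data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 2000
    X = np.column_stack([
        rng.normal(65, 12, n),       # age (years)
        rng.binomial(1, 0.5, n),     # sex (male = 1)
        rng.integers(1, 5, n),       # ASA grade 1-4
        rng.normal(80, 20, n),       # preoperative eGFR
        rng.binomial(1, 0.6, n),     # planned open surgery
        rng.binomial(1, 0.3, n),     # ACE inhibitor / ARB use
    ])
    lin_pred = (-3.5 + 0.02 * X[:, 0] + 0.3 * X[:, 2] - 0.01 * X[:, 3]
                + 0.4 * X[:, 4] + 0.3 * X[:, 5])
    y = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

    # Bootstrap optimism: refit on each resample, compare its performance on the
    # resample with its performance on the original data, then subtract the
    # average optimism from the apparent c-statistic.
    optimism = []
    for _ in range(100):
        idx = rng.integers(0, n, n)
        boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)

    print(f"Apparent c-statistic: {apparent_auc:.3f}")
    print(f"Optimism-corrected c-statistic: {apparent_auc - np.mean(optimism):.3f}")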

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) than in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than in emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
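
    To illustrate how an adjusted odds ratio of this kind is obtained, here is a hedged sketch of a multivariable logistic regression on simulated data; the covariates are placeholders rather than the study's full patient- and disease-factor adjustment set.

    # Sketch: adjusted association between checklist use and 30-day mortality.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 5000
    df = pd.DataFrame({
        "checklist": rng.binomial(1, 0.7, n),
        "age": rng.normal(60, 15, n),
        "emergency": rng.binomial(1, 0.4, n),
    })
    lin_pred = -5 + 0.03 * df["age"] + 1.0 * df["emergency"] - 0.5 * df["checklist"]
    df["death_30d"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

    fit = smf.logit("death_30d ~ checklist + age + emergency", data=df).fit(disp=False)
    summary = pd.concat([np.exp(fit.params).rename("OR"), np.exp(fit.conf_int())], axis=1)
    summary.columns = ["OR", "2.5%", "97.5%"]   # exponentiated coefficients and c.i.
    print(summary)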

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone.
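
    A hedged sketch of a multilevel logistic model of this general shape (random intercept for hospital, fixed effects for income group and disease factors), fitted here by variational Bayes on simulated data; the variables, effect sizes and the specific estimator are assumptions rather than the study's analysis.

    # Sketch: end colostomy vs primary anastomosis with a hospital random intercept.
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(4)
    n, n_hosp = 1600, 80
    hospital = rng.integers(0, n_hosp, n)
    hosp_effect = rng.normal(0, 0.5, n_hosp)[hospital]   # hospital-level variation
    df = pd.DataFrame({
        "hospital": hospital,
        "low_hdi": rng.binomial(1, 0.07, n),
        "emergency": rng.binomial(1, 0.3, n),
        "perforation": rng.binomial(1, 0.35, n),
    })
    lin_pred = (-1.5 + 1.2 * df["low_hdi"] + 1.4 * df["emergency"]
                + 1.4 * df["perforation"] + hosp_effect)
    df["end_colostomy"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

    # Random intercept per hospital; exponentiated fixed-effect posterior means
    # approximate adjusted odds ratios.
    model = BinomialBayesMixedGLM.from_formula(
        "end_colostomy ~ low_hdi + emergency + perforation",
        {"hospital": "0 + C(hospital)"},
        df,
    )
    result = model.fit_vb()
    print(result.summary())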

    Evaluation of appendicitis risk prediction models in adults with suspected appendicitis

    Background Appendicitis is the most common general surgical emergency worldwide, but its diagnosis remains challenging. The aim of this study was to determine whether existing risk prediction models can reliably identify patients presenting to hospital in the UK with acute right iliac fossa (RIF) pain who are at low risk of appendicitis. Methods A systematic search was completed to identify all existing appendicitis risk prediction models. Models were validated using UK data from an international prospective cohort study that captured consecutive patients aged 16–45 years presenting to hospital with acute RIF pain from March to June 2017. The main outcome was best achievable model specificity (proportion of patients who did not have appendicitis correctly classified as low risk) whilst maintaining a failure rate below 5 per cent (proportion of patients identified as low risk who actually had appendicitis). Results Some 5345 patients across 154 UK hospitals were identified, of whom two-thirds (3613 of 5345, 67·6 per cent) were women. Women were more than twice as likely as men to undergo surgery with removal of a histologically normal appendix (272 of 964, 28·2 per cent versus 120 of 993, 12·1 per cent; relative risk 2·33, 95 per cent c.i. 1·92 to 2·84; P < 0·001). Of 15 validated risk prediction models, the Adult Appendicitis Score performed best (cut-off score 8 or less, specificity 63·1 per cent, failure rate 3·7 per cent). The Appendicitis Inflammatory Response Score performed best for men (cut-off score 2 or less, specificity 24·7 per cent, failure rate 2·4 per cent). Conclusion Women in the UK had a disproportionate risk of admission without surgical intervention and high rates of normal appendicectomy. Risk prediction models that support shared decision-making by identifying adults in the UK at low risk of appendicitis were identified.
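
    The "best achievable specificity at a failure rate below 5 per cent" criterion can be made concrete with a small sketch on simulated scores and outcomes; the score distribution and threshold range here are invented, not data from the validated models.

    # Sketch: pick the cut-off that maximizes specificity while keeping the
    # failure rate (appendicitis among patients labelled low risk) below 5%.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 2000
    appendicitis = rng.binomial(1, 0.35, n)
    # Higher scores tend to accompany appendicitis in this toy example.
    score = rng.normal(6 + 4 * appendicitis, 3, n).round().clip(0, 20)

    best = None
    for cutoff in range(int(score.max()) + 1):
        low_risk = score <= cutoff
        if low_risk.sum() == 0:
            continue
        failure_rate = appendicitis[low_risk].mean()       # appendicitis missed
        specificity = low_risk[appendicitis == 0].mean()   # negatives called low risk
        if failure_rate < 0.05 and (best is None or specificity > best[1]):
            best = (cutoff, specificity, failure_rate)

    if best:
        print(f"Cut-off <= {best[0]}: specificity {best[1]:.1%}, failure rate {best[2]:.1%}")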