
    Mathematics difficulties in extremely preterm children: evidence of a specific deficit in basic mathematics processing

    Background: Extremely preterm (EP, <26 wk gestation) children have been observed to have poor academic achievement in comparison to their term-born peers, especially in mathematics. This study investigated potential underlying causes of this difficulty. Methods: A total of 219 EP participants were compared with 153 term-born control children at 11 y of age. All children were assessed by a psychologist on a battery of standardized cognitive tests and a number estimation test assessing children’s numerical representations. Results: EP children underperformed in all tests in comparison with the term controls (the majority of Ps < 0.001). Different underlying relationships between performance on the number estimation test and mathematical achievement were found in EP as compared with control children. That is, even after controlling for cognitive ability, a relationship between number representations and mathematical performance persisted for EP children only (EP: r = 0.346, n = 186, P < 0.001; control: r = 0.095, n = 146, P = 0.256). Conclusion: Interventions for EP children may target improving children’s numerical representations in order to subsequently remediate their mathematical skills
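    The partial correlation reported for the EP group (r = 0.346 after controlling for cognitive ability) can be reproduced with a residualisation approach: regress both the number-estimation score and the mathematics score on cognitive ability, then correlate the residuals. A minimal sketch, assuming illustrative variable names and simulated data rather than the study's dataset:

```python
# Sketch of a first-order partial correlation, as used in the abstract to relate
# number-estimation accuracy to mathematics scores while controlling for cognitive
# ability. Variable names (estimation, maths, cognition) are illustrative only.
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    """Correlate x and y after regressing the covariate out of both."""
    def residualize(v, c):
        # least-squares fit of v on [1, c]; return the residuals
        design = np.column_stack([np.ones_like(c), c])
        coef, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ coef
    rx = residualize(np.asarray(x, float), np.asarray(covariate, float))
    ry = residualize(np.asarray(y, float), np.asarray(covariate, float))
    return stats.pearsonr(rx, ry)  # (r, two-sided p-value)

# Example with simulated data standing in for the EP group
rng = np.random.default_rng(0)
cognition = rng.normal(100, 15, 186)
estimation = 0.4 * cognition + rng.normal(0, 10, 186)
maths = 0.5 * cognition + 0.3 * estimation + rng.normal(0, 10, 186)
r, p = partial_corr(estimation, maths, cognition)
print(f"partial r = {r:.3f}, p = {p:.3f}")
```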

    Children's vomiting following posterior fossa surgery: A retrospective study

    Background Nausea and vomiting is a problem for children after neurosurgery, and those requiring posterior fossa procedures appear to have a high incidence. This clinical observation has not been quantified, nor have risk factors unique to this group of children been elucidated. Methods A six-year retrospective chart audit at two Canadian children's hospitals was conducted. The incidence of nausea and vomiting was extracted. Hierarchical multivariable logistic regression was used to quantify risk and protective factors for vomiting by 120 hours after surgery and for early vs. late vomiting. Results The incidence of vomiting over a ten-day postoperative period was 76.7%. Documented vomiting ranged from single events to greater than 20 over the same period. In the final multivariable model: adolescents (age 12 to <17) were less likely to vomit by 120 hours after surgery than other age groups; those who received desflurane, when compared with all other volatile anesthetics, were more likely to vomit, yet the use of ondansetron with desflurane decreased that likelihood. Children who had intraoperative ondansetron were more likely to vomit in the final multivariable model (perhaps because of its use, in the clinical judgment of the anesthesiologist, for children considered at risk). Children who started vomiting in the first 24 hours were more likely to be school age (groups 4 to <7 and 7 to <12) and to have received desflurane. Nausea was not well documented and was therefore not analyzed. Conclusion The incidence of vomiting in children after posterior fossa surgery is sufficient to consider all children requiring these procedures to be at high risk for POV. Nausea requires better assessment and documentation.
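    The final model described above is a multivariable logistic regression of vomiting on age group, volatile anesthetic and ondansetron use, including an ondansetron-by-desflurane interaction. A minimal sketch of a simplified, single-level version (the study itself used a hierarchical model), with simulated data and illustrative column names:

```python
# Simplified multivariable logistic regression for "any vomiting by 120 h".
# Column names and data are illustrative assumptions, not the audit dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "age_group": rng.choice(["<4", "4-<7", "7-<12", "12-<17"], n),
    "desflurane": rng.integers(0, 2, n),
    "ondansetron": rng.integers(0, 2, n),
})
# Simulated outcome: desflurane raises the odds of vomiting, adolescents vomit
# less, and ondansetron offsets part of the desflurane effect.
logit_p = (0.6 + 0.9 * df.desflurane
           - 0.8 * (df.age_group == "12-<17")
           - 0.5 * df.desflurane * df.ondansetron)
df["vomited"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("vomited ~ C(age_group) + desflurane * ondansetron", data=df).fit()
print(model.summary())
print(np.exp(model.params))  # coefficients expressed as odds ratios
```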

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and the single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
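    A minimal sketch of the association and discrimination statistics named in the methods (Kendall's tau between difficulty grade and a dichotomous outcome, and an AUROC for conversion to open surgery), using simulated data rather than the CholeS or reference cohorts:

```python
# Kendall's tau and AUROC for an ordinal difficulty grade against a binary outcome.
# Data are simulated; only the calculations are illustrated.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
grade = rng.integers(1, 6, 1000)                   # Nassar grade 1-5
p_convert = 1 / (1 + np.exp(-(grade - 4.5)))       # conversion more likely at high grades
converted = rng.binomial(1, p_convert)

tau, p_value = stats.kendalltau(grade, converted)
auroc = roc_auc_score(converted, grade)            # grade used as the predictive score
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3g}), AUROC = {auroc:.3f}")
```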

    Rapid Dopaminergic Modulation of the Fish Hypothalamic Transcriptome and Proteome

    Background - Dopamine (DA) is a major neurotransmitter playing an important role in the regulation of vertebrate reproduction. We developed a novel method for the comparison of transcriptomic and proteomic data obtained from in vivo experiments designed to study the neuroendocrine actions of DA. // Methods and Findings - Female goldfish were injected (i.p.) with DA agonists (D1-specific: SKF 38393, or D2-specific: LY 171555) and sacrificed after 5 h. Serum LH levels were reduced by 57% and 75% by SKF 38393 and LY 171555, respectively, indicating that the treatments produced physiologically relevant responses in vivo. Bioinformatic strategies and a ray-finned fish database were established for microarray and iTRAQ proteomic analysis of the hypothalamus, revealing a total of 3088 mRNAs and 42 proteins as differentially regulated by the treatments. Twenty-one proteins, and the mRNAs corresponding to these proteins, appeared on both lists. Many of the mRNAs and proteins affected by the treatments were grouped into the Gene Ontology categories of protein complex, signal transduction, response to stimulus, and regulation of cellular processes. There was 57% and 14% directional agreement between the differentially regulated mRNAs and proteins for SKF 38393 and LY 171555, respectively. // Conclusions - The results demonstrate the applicability of advanced high-throughput genomic and proteomic analyses in an amenable, well-studied teleost model species whose genome has yet to be sequenced. We demonstrate that DA rapidly regulates multiple hypothalamic pathways and processes that are also known to be involved in pathologies of the central nervous system.
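    The directional agreement figures (57% and 14%) summarise how often a transcript and its matched protein changed in the same direction. A minimal sketch of that calculation, with made-up fold changes and gene names used purely for illustration:

```python
# Directional agreement between matched mRNA and protein fold changes: the
# fraction of pairs whose log2 fold changes share the same sign. Values are invented.
import numpy as np
import pandas as pd

matched = pd.DataFrame({
    "gene":           ["gnrhr", "th", "drd2", "cga", "actb", "gfap", "calm1"],
    "mrna_log2fc":    [  1.2, -0.8,   0.5,  -1.5,   0.1,  -0.4,    0.9],
    "protein_log2fc": [  0.6, -0.3,  -0.2,  -0.9,   0.4,   0.2,    0.7],
})
same_direction = np.sign(matched.mrna_log2fc) == np.sign(matched.protein_log2fc)
agreement = same_direction.mean() * 100
print(f"Directional agreement: {agreement:.0f}% of {len(matched)} matched pairs")
```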

    Health workforce development planning in the Sultanate of Oman: a case study

    Introduction Oman's recent experience in health workforce development may be viewed against the backdrop of the situation just three or four decades ago, when it had only a few physicians and nurses (mostly expatriate). All workforce categories in Oman have grown substantially over the last two decades. Increased self-reliance was achieved despite substantial growth in workforce stocks. Stocks of physicians and nurses grew significantly during 1985–2007. This development was the outcome of well-considered national policies and plans. This case outlines how Oman is continuing to turn around its excessive dependence on an expatriate workforce through strategic workforce development planning. Case description The Sultanate's early development initiatives focused on building a strong health care infrastructure by importing workforce. However, the policy-makers stressed national workforce development for a sustainable future. Beginning with the formulation of a strategic health workforce development plan in 1991, the stage was set for adopting workforce planning as an essential strategy for sustainable health development and workforce self-reliance. Oman continued to develop its educational infrastructure, and began to produce as much workforce as possible, in order to meet health care demands and achieve workforce self-reliance. Other policy initiatives with a beneficial impact on Oman's workforce development were: regionalization of nursing institutes, active collaboration with universities and overseas specialty boards, qualitative improvement of the education system, development of a strong continuing professional development system, efforts to improve workforce management, planned change management, and needs-based micro- and macro-level studies. Strong political will and bold policy initiatives, dedicated workforce planning and educational endeavours have all helped Oman to develop its health workforce stocks and gain self-reliance. Discussion and evaluation Oman has successfully innovated workforce planning within a favorable policy environment. Its intensive and extensive workforce planning efforts, with the close involvement of policy-makers, educators and workforce managers, have ensured an adequate and suitable workforce in health institutions and increased self-reliance in the health workforce. Conclusion Oman's experience in workforce planning and development illustrates a country benefiting from the successful application of workforce planning concepts and tools. Instead of being complacent about its achievements so far, every country needs to improve or sustain its planning efforts in this way, in order to circumvent current workforce deficiencies and to further increase self-reliance and improve workforce efficiency and effectiveness.

    Population‐based cohort study of outcomes following cholecystectomy for benign gallbladder diseases

    Background The aim was to describe the management of benign gallbladder disease and identify characteristics associated with all‐cause 30‐day readmissions and complications in a prospective population‐based cohort. Methods Data were collected on consecutive patients undergoing cholecystectomy in acute UK and Irish hospitals between 1 March and 1 May 2014. Potential explanatory variables influencing all‐cause 30‐day readmissions and complications were analysed by means of multilevel, multivariable logistic regression modelling using a two‐level hierarchical structure with patients (level 1) nested within hospitals (level 2). Results Data were collected on 8909 patients undergoing cholecystectomy from 167 hospitals. Some 1451 cholecystectomies (16·3 per cent) were performed as an emergency, 4165 (46·8 per cent) as elective operations, and 3293 patients (37·0 per cent) had had at least one previous emergency admission, but had surgery on a delayed basis. The readmission and complication rates at 30 days were 7·1 per cent (633 of 8909) and 10·8 per cent (962 of 8909) respectively. Both readmissions and complications were independently associated with increasing ASA fitness grade, duration of surgery, and increasing numbers of emergency admissions with gallbladder disease before cholecystectomy. No identifiable hospital characteristics were linked to readmissions and complications. Conclusion Readmissions and complications following cholecystectomy are common and associated with patient and disease characteristics
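    The two-level model described in the methods is a random-intercept logistic regression with patients nested within hospitals. A minimal sketch using statsmodels' Bayesian mixed GLM (a variational approximation, not necessarily the estimator used in the study), with simulated data and illustrative column names:

```python
# Two-level random-intercept logistic regression: patients (level 1) nested within
# hospitals (level 2). Data, column names, and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(3)
n, n_hosp = 2000, 40
df = pd.DataFrame({
    "hospital": rng.integers(0, n_hosp, n),
    "asa": rng.integers(1, 4, n),               # ASA fitness grade 1-3
    "prior_admissions": rng.poisson(0.5, n),    # emergency admissions before surgery
})
hosp_effect = rng.normal(0, 0.3, n_hosp)[df.hospital]   # hospital-level variation
logit_p = -2.5 + 0.4 * df.asa + 0.3 * df.prior_admissions + hosp_effect
df["readmitted"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = BinomialBayesMixedGLM.from_formula(
    "readmitted ~ asa + prior_admissions",
    {"hospital": "0 + C(hospital)"},   # random intercept per hospital
    df,
)
result = model.fit_vb()                # variational Bayes fit
print(result.summary())
```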

    Evaluation of appendicitis risk prediction models in adults with suspected appendicitis

    Background Appendicitis is the most common general surgical emergency worldwide, but its diagnosis remains challenging. The aim of this study was to determine whether existing risk prediction models can reliably identify patients presenting to hospital in the UK with acute right iliac fossa (RIF) pain who are at low risk of appendicitis. Methods A systematic search was completed to identify all existing appendicitis risk prediction models. Models were validated using UK data from an international prospective cohort study that captured consecutive patients aged 16–45 years presenting to hospital with acute RIF pain from March to June 2017. The main outcome was best achievable model specificity (proportion of patients who did not have appendicitis correctly classified as low risk) whilst maintaining a failure rate below 5 per cent (proportion of patients identified as low risk who actually had appendicitis). Results Some 5345 patients across 154 UK hospitals were identified, of whom two‐thirds (3613 of 5345, 67·6 per cent) were women. Women were more than twice as likely to undergo surgery with removal of a histologically normal appendix (272 of 964, 28·2 per cent) as men (120 of 993, 12·1 per cent) (relative risk 2·33, 95 per cent c.i. 1·92 to 2·84; P < 0·001). Of 15 validated risk prediction models, the Adult Appendicitis Score performed best (cut‐off score 8 or less, specificity 63·1 per cent, failure rate 3·7 per cent). The Appendicitis Inflammatory Response Score performed best for men (cut‐off score 2 or less, specificity 24·7 per cent, failure rate 2·4 per cent). Conclusion Women in the UK had a disproportionate risk of admission without surgical intervention and had high rates of normal appendicectomy. Risk prediction models were identified that could support shared decision‐making by identifying adults in the UK at low risk of appendicitis.
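    The main outcome (best achievable specificity while keeping the failure rate below 5 per cent) amounts to a search over candidate cut-offs of a scored model. A minimal sketch with simulated scores and outcomes, not the study's data:

```python
# Cut-off search for a scored risk model: maximise specificity subject to a
# failure rate below 5%. Scores and outcomes are simulated for illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 1500
appendicitis = rng.binomial(1, 0.35, n)
score = rng.normal(6 + 4 * appendicitis, 2, n).round()   # higher score -> more likely appendicitis

best = None
for cutoff in np.unique(score):
    low_risk = score <= cutoff
    if low_risk.sum() == 0:
        continue
    failure_rate = appendicitis[low_risk].mean()          # appendicitis among those classed low risk
    specificity = low_risk[appendicitis == 0].mean()      # non-appendicitis correctly classed low risk
    if failure_rate < 0.05 and (best is None or specificity > best[1]):
        best = (cutoff, specificity, failure_rate)

if best:
    print(f"cut-off <= {best[0]:.0f}: specificity {best[1]:.1%}, failure rate {best[2]:.1%}")
```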

    Effects of different cultivated or weed grasses, grown as pure stands or in combination with wheat, on take-all and its suppression in subsequent wheat crops

    Grass species were grown in plots, as pure stands or mixed with wheat, after a sequence of wheat crops in which take-all (Gaeumannomyces graminis var. tritici) had developed. Annual brome grasses maintained take-all inoculum in the soil as well as wheat (grown as a continuous sequence), and much better than cultivated species with a perennial habit. Take-all developed more in wheat grown after Anisantha sterilis (barren brome) or Bromus secalinus (rye brome), with or without wheat, than in continuous grass-free wheat in the same year, where take-all decline was apparently occurring. It was equally or more severe, however, in wheat grown after Lolium perenne (rye-grass) or Festuca arundinacea (tall fescue), despite these species having left the least inoculum in the soil. It was most severe in plots where these two grasses had been grown as mixtures with wheat. It is postulated that the presence of these grasses inhibited the development of take-all-suppressive microbiota that had developed in the grass-free wheat crops. The effects of the grasses appeared to be temporary, as amounts of take-all in a second subsequent winter wheat test crop were similar after all treatments. These results have important implications for take-all risk in wheat and, perhaps, other cereal crops grown after grass weed-infested cereals or after set-aside or similar 1-year covers containing weeds or sown grasses, especially in combination with cereal volunteers. They also indicate that grasses might be used experimentally in wheat crop sequences for investigating the mechanisms of suppression of, and conduciveness to, take-all