
    Social complexity in bees is not sufficient to explain lack of reversions to solitary living over long time scales

    Background: The major lineages of eusocial insects (ants, termites, stingless bees, honeybees and vespid wasps) all have ancient origins (≥ 65 mya) with no reversions to solitary behaviour. This has prompted the notion of a 'point of no return', whereby the evolutionary elaboration and integration of behavioural, genetic and morphological traits over a very long period of time leads to a situation where reversion to solitary living is no longer an evolutionary option. Results: We show that in another group of social insects, the allodapine bees, there was a single origin of sociality > 40 mya. We also provide data on the biology of a key allodapine species, Halterapis nigrinervis, showing that it is truly social. H. nigrinervis was thought to be the only allodapine that was not social, and our findings therefore indicate that there have been no losses of sociality among extant allodapine clades. Allodapine colony sizes rarely exceed ten females per nest, and females of virtually all species are capable of nesting and reproducing independently, so these bees clearly do not fit the 'point of no return' concept. Conclusion: We argue that allodapine sociality has been maintained by ecological constraints and the benefits of alloparental care, rather than by behavioural, genetic or morphological constraints on independent living. Allodapine brood are highly vulnerable to predation because they are progressively reared in an open nest (not in sealed brood cells), which provides potentially large benefits for alloparental care and incentives for reproductives to tolerate potential alloparents. We argue that similar vulnerabilities may also help explain the lack of reversions to solitary living in other taxa with ancient social origins.
    Luke B. Chenoweth, Simon M. Tierney, Jaclyn A. Smith, Steven J.B. Cooper and Michael P. Schwarz

    Are youth mentoring programs good value-for-money? An evaluation of the Big Brothers Big Sisters Melbourne Program

    Background: The Big Brothers Big Sisters (BBBS) program matches vulnerable young people with a trained, supervised adult volunteer as mentor. The young people are typically seriously disadvantaged, with multiple psychosocial problems. Methods: Threshold analysis was undertaken to determine whether investment in the program was a worthwhile use of limited public funds. The potential cost savings were based on US estimates of the lifetime costs associated with high-risk youth who drop out of school and become adult criminals. The intervention was modelled for children aged 10–14 years residing in Melbourne in 2004. Results: If the program serviced 2,208 of the most vulnerable young people, it would cost AUD 39.5 M. Assuming 50% were high-risk, the associated costs of their adult criminality would be AUD 3.3 billion. To break even, the program would need to avert high-risk behaviours in only 1.3% (14/1,104) of participants. Conclusion: This indicative evaluation suggests that the BBBS program represents excellent 'value for money'.
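    The break-even figure follows from simple threshold arithmetic: divide the program cost by the avoided lifetime cost per averted high-risk case. A minimal sketch in Python, with variable names and rounding of my own (the abstract supplies only the headline figures):

    ```python
    import math

    # Headline figures from the abstract; everything else is illustrative.
    program_cost = 39.5e6    # AUD, servicing 2,208 young people
    high_risk = 2208 // 2    # abstract assumes 50% are high-risk -> 1,104
    lifetime_costs = 3.3e9   # AUD, adult criminality costs across all high-risk youth

    cost_per_case = lifetime_costs / high_risk   # ~AUD 3.0 M per high-risk youth

    # Break-even: smallest number of averted cases whose avoided lifetime
    # costs cover the program cost.
    cases_needed = math.ceil(program_cost / cost_per_case)
    print(cases_needed, f"{cases_needed / high_risk:.1%}")   # -> 14 1.3%
    ```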

    Sex/Gender and Socioeconomic Differences in the Predictive Ability of Self-Rated Health for Mortality

    Background: Studies have reported that the predictive ability of self-rated health (SRH) for mortality varies by sex/gender and socioeconomic group. The purpose of this study is to evaluate this relationship in Japan and explore the potential reasons for differences between the groups. Methodology/Principal Findings: The analyses in the study were based on the Aichi Gerontological Evaluation Study's (AGES) 2003 Cohort Study in Chita Peninsula, Japan, which followed the four-year survival status of 14,668 community-dwelling people who were at least 65 years old at the start of the study. We first examined sex/gender and education-level differences in the prevalence of fair/poor SRH. We then estimated the sex/gender- and education-specific hazard ratios (HRs) of mortality associated with lower SRH using Cox models. Control variables, including health behaviors (smoking and drinking), symptoms of depression, and chronic co-morbid conditions, were added to sequential regression models. The results showed men and women reported a similar prevalence of lower SRH. However, lower SRH was a stronger predictor of mortality in men (HR = 2.44 [95% confidence interval (CI): 2.14–2.80]) than in women (HR = 1.88 [95% CI: 1.44–2.47]; p for sex/gender interaction = 0.018). The sex/gender difference in the predictive ability of SRH was progressively attenuated with the additional introduction of other co-morbid conditions. The predictive ability among individuals with high school education (HR = 2.39 [95% CI: 1.74–3.30]) was similar to that among individuals with less than a high school education (HR = 2.14 [95% CI: 1.83–2.50]; p for education interaction = 0.549). Conclusions: The sex/gender difference in the predictive ability of SRH for mortality among this elderly Japanese population may be explained by male/female differences in what goes into an individual's assessment of their SRH, with males apparently weighting depressive symptoms more than females.
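    The hazard ratios above come from standard Cox proportional-hazards regression fitted separately by sex/gender (and by education level), with covariates added in sequential models. A minimal sketch of that kind of analysis using the Python lifelines package; the file name, column names, and covariate coding are assumptions for illustration, not the actual AGES variables:

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical analysis file: one row per participant, with follow-up
    # time, death indicator, and baseline covariates (names are invented).
    df = pd.read_csv("ages_cohort.csv")
    cols = ["poor_srh", "smoker", "drinker", "depressive_symptoms",
            "comorbidities", "years_followed", "died"]

    # Sex/gender-specific models, as in the abstract's stratified analysis.
    for sex, group in df.groupby("sex"):
        cph = CoxPHFitter()
        cph.fit(group[cols], duration_col="years_followed", event_col="died")
        # exp(coefficient) on poor_srh is the hazard ratio for lower SRH
        print(sex, cph.hazard_ratios_["poor_srh"])
    ```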

    Can computerized clinical decision support systems improve practitioners' diagnostic test ordering behavior? A decision-maker-researcher partnership systematic review

    Background: Underuse and overuse of diagnostic tests have important implications for health outcomes and costs. Decision support technology purports to optimize the use of diagnostic tests in clinical practice. The objective of this review was to assess whether computerized clinical decision support systems (CCDSSs) are effective at improving the ordering of tests for diagnosis, monitoring of disease, or monitoring of treatment. The outcome of interest was the effect on practitioners' diagnostic test-ordering behavior. Methods: We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews database, Inspec, and reference lists for eligible articles published up to January 2010. We included randomized controlled trials comparing the use of CCDSSs to usual practice or non-CCDSS controls in clinical care settings. Trials were eligible if at least one component of the CCDSS gave suggestions for ordering or performing a diagnostic procedure. We considered studies 'positive' if they showed a statistically significant improvement in at least 50% of test-ordering outcomes. Results: Thirty-five studies were identified, with significantly higher methodological quality in those published after the year 2000 (p = 0.002). Thirty-three trials reported evaluable data on diagnostic test ordering, and 55% (18/33) of CCDSSs improved testing behavior overall, including 83% (5/6) for diagnosis, 63% (5/8) for treatment monitoring, 35% (6/17) for disease monitoring, and 100% (3/3) for other purposes. Four of the systems explicitly attempted to reduce test-ordering rates, and all succeeded. Factors of particular interest to decision makers, such as costs, user satisfaction, and impact on workflow, were rarely investigated or reported. Conclusions: Some CCDSSs can modify practitioner test-ordering behavior. To better inform development and implementation efforts, studies should describe in more detail potentially important factors such as system design, user interface, local context, and implementation strategy, and should evaluate impact on user satisfaction, workflow, costs, and unintended consequences.
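    The review's 'positive' label is a simple vote-counting rule over each trial's pre-specified outcomes. A minimal sketch of that rule, using a data representation invented here for illustration:

    ```python
    # Vote counting as described in the Methods: a study counts as 'positive'
    # if at least 50% of its test-ordering outcomes showed a statistically
    # significant improvement. The list-of-flags representation is assumed.
    def is_positive(outcomes_improved: list[bool]) -> bool:
        if not outcomes_improved:
            raise ValueError("study reported no evaluable outcomes")
        return sum(outcomes_improved) / len(outcomes_improved) >= 0.5

    print(is_positive([True, True, False]))  # 2 of 3 outcomes improved -> True
    ```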

    Do Electronic Health Records Help or Hinder Medical Education?

    Many countries worldwide are digitizing patients' medical records. What impact will these electronic health records have upon medical education? This debate examines the threats and opportunities.

    Implementation and evaluation of a nurse-centered computerized potassium regulation protocol in the intensive care unit - a before and after analysis

    Background: Potassium disorders can cause major complications and must be avoided in critically ill patients. Regulation of potassium in the intensive care unit (ICU) requires potassium administration with frequent blood potassium measurements and subsequent adjustments of the amount of potassium administered. The use of a potassium replacement protocol can improve potassium regulation. For safety and efficiency, computerized protocols appear to be superior to paper protocols. The aim of this study was to evaluate whether a computerized potassium regulation protocol in the ICU improved potassium regulation. Methods: In our surgical ICU (12 beds) and cardiothoracic ICU (14 beds) at a tertiary academic center, we implemented a nurse-centered computerized potassium protocol integrated with the pre-existing glucose control program GRIP (Glucose Regulation in Intensive Care patients). Before implementation of the computerized protocol, potassium replacement was physician-driven. Potassium was delivered continuously either by central venous catheter or by gastric, duodenal or jejunal tube. After every potassium measurement, nurses received a recommendation for the potassium administration rate and the time to the next measurement. In this before-after study we evaluated potassium regulation with GRIP. The attitude of the nursing staff towards potassium regulation with computer support was measured with questionnaires. Results: The patient cohort consisted of 775 patients before and 1,435 after the implementation of computerized potassium control. The numbers of patients with hypokalemia (<3.5 mmol/L) and hyperkalemia (>5.0 mmol/L) were recorded, as well as the time course of potassium levels after ICU admission, and the incidence of hypokalemia and hyperkalemia was calculated. Median potassium levels were similar in both study periods, but the level of potassium control improved: the incidence of hypokalemia decreased from 2.4% to 1.7% (P < 0.001) and hyperkalemia from 7.4% to 4.8% (P < 0.001). Nurses indicated that they considered computerized potassium control an improvement over previous practice. Conclusions: Computerized potassium control, integrated with the nurse-centered GRIP program for glucose regulation, is effective and reduces the incidence of hypo- and hyperkalemia in the ICU compared with physician-driven potassium regulation.
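    The abstract does not spell out GRIP's decision rules, but the protocol's shape is clear: each potassium measurement maps to a recommended administration rate and a time to the next measurement. A hypothetical illustration of one such step; only the 3.5 and 5.0 mmol/L thresholds come from the abstract, while the rates and intervals are invented and are not the actual GRIP rules:

    ```python
    def recommend(potassium_mmol_l: float) -> tuple[float, int]:
        """Return (administration rate in mmol/h, minutes to next measurement)."""
        if potassium_mmol_l < 3.5:    # hypokalemia: replace faster, re-check soon
            return 10.0, 60
        if potassium_mmol_l > 5.0:    # hyperkalemia: stop supplementation, re-check soon
            return 0.0, 60
        if potassium_mmol_l < 4.0:    # low-normal: modest replacement
            return 5.0, 240
        return 2.0, 480               # in range: maintenance rate, longer interval

    rate, next_check = recommend(3.2)
    print(f"run {rate} mmol/h, re-measure in {next_check} min")
    ```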

    Computerized clinical decision support systems for drug prescribing and management: A decision-maker-researcher partnership systematic review

    Background: Computerized clinical decision support systems (CCDSSs) for drug therapy management are designed to promote safe and effective medication use. Evidence documenting the effectiveness of CCDSSs for improving drug therapy is necessary for informed adoption decisions. The objective of this review was to systematically review randomized controlled trials assessing the effects of CCDSSs for drug therapy management on process of care and patient outcomes. We also sought to identify system and study characteristics that predicted benefit. Methods: We conducted a decision-maker-researcher partnership systematic review. We updated our earlier reviews (1998, 2005) by searching MEDLINE, EMBASE, EBM Reviews, Inspec, and other databases, and consulting reference lists through January 2010. Authors of 82% of included studies confirmed or supplemented extracted data. We included only randomized controlled trials that evaluated the effect on process of care or patient outcomes of a CCDSS for drug therapy management compared to care provided without a CCDSS. A study was considered to have a positive effect (i.e., the CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive. Results: Sixty-five studies met our inclusion criteria, including 41 new studies since our previous review. Methodological quality was generally high and unchanged over time. CCDSSs improved process of care performance in 37 of the 59 studies assessing this type of outcome (64%, 57% of all studies). Twenty-nine trials assessed patient outcomes, of which six trials (21%, 9% of all trials) reported improvements. Conclusions: CCDSSs inconsistently improved process of care measures and seldom improved patient outcomes. The lack of clear patient benefit and the lack of data on harms and costs preclude a recommendation to adopt CCDSSs for drug therapy management.